This audio is presented by the University of Erlangen-Nürnberg.
And to be honest, already on Tuesday I felt pretty bad.
So right after I left the lecture I had to stay in bed for quite some time.
But now I'm much better, so I'm back to giving lectures.
So, I'm not sure, okay, so this is already on.
So let's hide this a bit.
Leonard gave the first day lecture and he told me he was a bit fast.
And maybe we want to go through the important points of the structure tensor and vesselness.
So maybe it was a bit too fast.
Did everybody understand the concepts of the structure tensor and vesselness?
No?
So should we go through it? Let's say we don't go into all the details,
but at least we go through the concepts, right?
So one thing that probably wasn't that hard is if you want to figure out where edges in an image are,
you have to look at the gradient.
So this was the first thing you should have discussed, edges and gradients.
If you want to detect edges in an image, an edge shows up as a strong intensity change.
So an edge in an image will look something like this, right?
And if you want to build an edge detector, you have to look at the changes in the image.
And what you can do is you can compute the gradient.
And if you now have, say, a steep change downwards, you would have a negative gradient.
So you probably get something like this, and then you get back to zero.
This would be your gradient.
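As a minimal sketch of this idea (not from the lecture itself; the signal values and the use of NumPy are illustrative assumptions), a 1D intensity profile with a steep downward edge and its numerical gradient look like this:

```python
import numpy as np

# Synthetic 1D intensity profile: bright region, steep downward edge, dark region.
signal = np.array([10, 10, 10, 10, 2, 2, 2, 2], dtype=float)

# np.gradient uses central differences in the interior and
# one-sided differences at the borders.
grad = np.gradient(signal)

print(signal)  # [10. 10. 10. 10.  2.  2.  2.  2.]
print(grad)    # negative values around the step, zero in the flat regions
```

The gradient is zero in the flat regions and strongly negative around the downward step, which is exactly the shape sketched above.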
And you have discussed the problem that we work with discrete data: typically, if you look at the definition of the derivative,
you want to make infinitesimally small steps in order to compute the derivative.
But this is a discrete image.
The smallest step that you can take is essentially one.
So what you do instead is you can look at neighboring pixels and just compute the difference.
And you looked at different options, so there are several options for how to implement this.
You can implement it as a forward difference, a backward difference, or a central difference.
So you've seen you can take two neighboring pixels and just subtract them,
and that will give you an estimate of the derivative.
But you can also take one pixel left and one pixel right and then subtract the two,
and this will also give you an estimate of the derivative.
So these are different choices that you can make to compute a discrete derivative.
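A minimal sketch of these three options, assuming a 1D signal sampled with step size one (the example values are illustrative):

```python
import numpy as np

f = np.array([10, 10, 10, 10, 2, 2, 2, 2], dtype=float)
i = 3  # pixel index at which we estimate the derivative

forward  = f[i + 1] - f[i]            # forward difference
backward = f[i] - f[i - 1]            # backward difference
central  = (f[i + 1] - f[i - 1]) / 2  # central difference

print(forward, backward, central)  # -8.0 0.0 -4.0
```

All three are estimates of the same derivative; they just look at different pairs of neighboring samples.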
Of course, this is suboptimal. If you want to compute higher order derivatives,
and also already with the first derivative, you get into trouble with noise,
because the differentiation operation will emphasize the high frequencies,
and you typically have noise in the high frequencies, so this will amplify the noise.
One thing that you can do, one trick that is often used, for example, in the Canny edge detector,
is to compute the derivative in one direction and apply a smoothing operation in the other direction.
This way you can get more stable gradients.
There are a lot of implementation details involved in that, but it is typically a very good trick.
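A sketch of this trick (the particular kernels below, a binomial smoothing kernel combined with a central difference, give a Sobel-like filter and are an illustrative choice, not necessarily the exact filter from the lecture):

```python
import numpy as np
from scipy.ndimage import convolve

smooth = np.array([1, 2, 1]) / 4.0    # smoothing along y (rows)
deriv = np.array([1, 0, -1]) / 2.0    # central difference along x (columns)
kernel_x = np.outer(smooth, deriv)    # separable 3x3 derivative filter

# Image with a vertical edge: constant along y, steep drop along x.
image = np.tile([10, 10, 10, 10, 2, 2, 2, 2], (6, 1)).astype(float)
grad_x = convolve(image, kernel_x)    # smoothed estimate of the x-derivative

print(grad_x[3])  # negative response in the columns around the edge
```

The smoothing perpendicular to the derivative direction averages out some of the noise, which is why the resulting gradients are more stable.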
Generally, all of these things can be implemented as numerical derivative operations,
so you can implement them as convolutions.
So the nice thing is now, if you want to compute a derivative like this,
what you essentially need to do is take the right value and subtract the left value.
So if you do that, then you will get a negative response here.
So let's say this is x_{i+1} and this is x_i; we are in a discrete world, so these are the discrete steps.
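As a sketch, this same difference can be written as a convolution (NumPy's convolve flips the kernel, so [1, -1] yields f[i+1] - f[i]; the signal is again an illustrative example):

```python
import numpy as np

f = np.array([10, 10, 10, 10, 2, 2, 2, 2], dtype=float)
kernel = np.array([1.0, -1.0])

# Convolution flips the kernel, so each output sample is f[i+1] - f[i].
derivative = np.convolve(f, kernel, mode='valid')
print(derivative)  # [ 0.  0.  0. -8.  0.  0.  0.]
```

The response is zero in the flat regions and negative exactly at the downward edge, as described above.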
This lecture focuses on recent developments in image processing driven by medical applications. All algorithms are motivated by practical problems. The mathematical tools required to solve the considered image processing tasks will be introduced.